This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.
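As a rough illustration of the quantization step such a compositional VQ-VAE relies on, the sketch below shows a minimal vector-quantization layer in PyTorch; the class name, codebook size, and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: maps each encoder output to its nearest codebook entry."""
    def __init__(self, num_codes=256, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e):
        # z_e: (batch, time, code_dim) continuous latents from the motion encoder
        flat = z_e.reshape(-1, z_e.shape[-1])                   # (B*T, D)
        dist = torch.cdist(flat, self.codebook.weight)          # distances to all codes
        idx = dist.argmin(dim=1)                                # nearest code index
        z_q = self.codebook(idx).view_as(z_e)                   # quantized latents
        # straight-through estimator so gradients flow back to the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx.view(z_e.shape[:-1])
```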
Bird's-Eye-View (BEV) 3D Object Detection is a crucial multi-view technique for autonomous driving systems. Recently, many works have been proposed that follow a similar paradigm consisting of three essential components, i.e., camera feature extraction, BEV feature construction, and task heads. Among the three components, BEV feature construction is BEV-specific compared with 2D tasks. Existing methods aggregate the multi-view camera features into a flattened grid to construct the BEV feature. However, flattening the BEV space along the height dimension fails to emphasize the informative features at different heights. For example, barriers lie at low heights while trucks extend to greater heights. In this paper, we propose a novel method named BEV Slice Attention Network (BEV-SAN) that exploits the intrinsic characteristics of different heights. Instead of flattening the BEV space, we first sample along the height dimension to build global and local BEV slices. Then, the features of the BEV slices are aggregated from the camera features and merged by an attention mechanism. Finally, we fuse the merged local and global BEV features with a transformer to generate the final feature map for the task heads. The purpose of the local BEV slices is to emphasize informative heights. To find them, we further propose a LiDAR-guided sampling strategy that leverages the statistical distribution of LiDAR points to determine the heights of the local slices. Compared with uniform sampling, LiDAR-guided sampling identifies more informative heights. We conduct detailed experiments to demonstrate the effectiveness of BEV-SAN. Code will be released.
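The sketch below illustrates one plausible way to merge per-height BEV slices with an attention weighting, assuming the slices have already been lifted from the camera features; module names and channel sizes are hypothetical and not taken from the BEV-SAN code.

```python
import torch
import torch.nn as nn

class SliceAttentionFusion(nn.Module):
    """Illustrative fusion of per-height BEV slices with learned attention weights.
    The slice sampling itself (uniform or LiDAR-guided heights) is assumed to
    happen upstream when lifting camera features into BEV space."""
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # per-slice global descriptor
            nn.Conv2d(channels, 1, kernel_size=1),
        )

    def forward(self, slices):
        # slices: (batch, num_slices, C, H, W) BEV features for each height slice
        b, s, c, h, w = slices.shape
        scores = self.score(slices.flatten(0, 1)).view(b, s, 1, 1, 1)
        weights = torch.softmax(scores, dim=1)    # attention over the height slices
        return (weights * slices).sum(dim=1)      # merged BEV feature (B, C, H, W)
```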
Recently, Bird's-Eye-View (BEV) representation has gained increasing attention in multi-view 3D object detection, which has demonstrated promising applications in autonomous driving. Although multi-view camera systems can be deployed at low cost, the lack of depth information makes current approaches adopt large models for good performance. Therefore, it is essential to improve the efficiency of BEV 3D object detection. Knowledge Distillation (KD) is one of the most practical techniques to train efficient yet accurate models. However, BEV KD is still under-explored to the best of our knowledge. Different from image classification tasks, BEV 3D object detection approaches are more complicated and consist of several components. In this paper, we propose a unified framework named BEV-LGKD to transfer knowledge in the teacher-student manner. However, directly applying the teacher-student paradigm to BEV features fails to achieve satisfactory results due to the heavy background information in RGB cameras. To solve this problem, we propose to leverage the localization advantage of LiDAR points. Specifically, we transform the LiDAR points to BEV space and generate the foreground mask and view-dependent mask for the teacher-student paradigm. Note that our method uses LiDAR points only to guide the KD between RGB models. As the quality of depth estimation is crucial for BEV perception, we further introduce depth distillation into our framework. Our unified framework is simple yet effective and achieves a significant performance boost. Code will be released.
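A minimal sketch of a LiDAR-guided, foreground-masked distillation term is given below, assuming the LiDAR points have already been rasterized into a BEV-space mask; the function name and the normalization are illustrative, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def masked_bev_distill_loss(student_bev, teacher_bev, fg_mask, eps=1e-6):
    """Illustrative foreground-masked distillation on BEV feature maps.
    fg_mask is assumed to come from LiDAR points rasterized into the BEV grid,
    so the student only imitates the teacher where objects are likely present.
    student_bev, teacher_bev: (B, C, H, W); fg_mask: (B, 1, H, W) in [0, 1]."""
    diff = F.mse_loss(student_bev, teacher_bev.detach(), reduction="none")
    return (diff * fg_mask).sum() / (fg_mask.sum() * student_bev.shape[1] + eps)
```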
Vision-centric Bird's-Eye-View (BEV) perception has shown promising potential and attracted increasing attention in autonomous driving. Recent works mainly focus on improving efficiency or accuracy but neglect the domain shift problem, resulting in severe degradation of transfer performance. Through extensive observations, we identify significant domain gaps across scene, weather, and day-night changing scenarios and make the first attempt to solve the domain adaptation problem for multi-view 3D object detection. Since BEV perception approaches are usually complicated and contain several components, the domain shift accumulation across multiple latent spaces makes BEV domain adaptation challenging. In this paper, we propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to ease the domain shift accumulation, which consists of a Depth-Aware Teacher (DAT) and a Multi-space Feature Aligned (MFA) student model. Specifically, the DAT model adopts uncertainty guidance to sample reliable depth information in the target domain. After constructing domain-invariant BEV perception, it transfers pixel- and instance-level knowledge to the student model. To further alleviate the domain shift at the global level, the MFA student model is introduced to align task-relevant multi-space features of the two domains. To verify the effectiveness of $M^{2}ATS$, we conduct BEV 3D object detection experiments on four cross-domain scenarios and achieve state-of-the-art performance (e.g., +12.6% NDS and +9.1% mAP on Day-Night). Code and dataset will be released.
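The snippet below sketches one simple form a multi-space feature-alignment term could take, matching channel statistics between source- and target-domain features in each latent space; it is a stand-in under assumed shapes, not the MFA student's actual objective.

```python
import torch
import torch.nn.functional as F

def multi_space_alignment_loss(source_feats, target_feats):
    """Illustrative global feature-alignment term over several latent spaces
    (e.g., image, depth, and BEV features). Simple moment matching is used
    here as a stand-in; the paper's MFA student may align features differently."""
    loss = 0.0
    for f_s, f_t in zip(source_feats, target_feats):
        # match channel-wise statistics of the two domains in each space
        loss = loss + F.mse_loss(f_s.mean(dim=(0, 2, 3)), f_t.mean(dim=(0, 2, 3)))
        loss = loss + F.mse_loss(f_s.std(dim=(0, 2, 3)), f_t.std(dim=(0, 2, 3)))
    return loss / max(len(source_feats), 1)
```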
Recently, deep neural networks (DNNs) have been used to reduce bandwidth and improve the quality of Internet video delivery. Existing methods train a corresponding content-specific super-resolution (SR) model for each video chunk on the server and stream the low-resolution (LR) video chunks together with the SR models to the client. Although they achieve promising results, the huge computational cost of network training limits their practical application. In this paper, we propose a method named Efficient Meta-Tuning (EMT) to reduce the computational cost. Instead of training from scratch, EMT adapts a meta-learned model to the first chunk of the input video. For the following chunks, it selects part of the parameters via gradient masking of the previously adapted model. To achieve a further speedup for EMT, we propose a novel sampling strategy to extract the most challenging patches from video frames. The proposed strategy is efficient and brings negligible extra cost. Our method significantly reduces the computational cost while achieving even better performance, paving the way for applying neural video delivery techniques to practical applications. We conduct extensive experiments on various efficient SR architectures, including ESPCN, SRCNN, FSRCNN, and EDSR-1, demonstrating the generalization ability of our work. The code is released at https://github.com/neural-video-delivery/emt-pytorch-eccv2022.
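As a rough sketch of the gradient-masking idea, the code below keeps only the parameters with the largest gradient magnitudes on the adapted model and freezes the rest for later chunks; `keep_ratio` and the per-tensor thresholding are assumptions of this sketch rather than details of EMT.

```python
import torch

def gradient_mask(model, loss, keep_ratio=0.1):
    """Illustrative gradient masking: keep only the parameters whose gradients
    on the first (adapted) chunk are largest, and freeze the rest afterwards.
    keep_ratio is a made-up hyper-parameter for this sketch."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    masks = []
    for g in grads:
        flat = g.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        thresh = flat.topk(k).values.min()          # per-tensor magnitude threshold
        masks.append((g.abs() >= thresh).float())   # 1 = keep updating, 0 = freeze
    return masks

def apply_masks(model, masks):
    # zero out gradients of frozen entries before the optimizer step
    params = [p for p in model.parameters() if p.requires_grad]
    for p, m in zip(params, masks):
        if p.grad is not None:
            p.grad.mul_(m)
```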
Since the future of computing is heterogeneous, scalability is a crucial problem for single image super-resolution. Recent works attempt to train one network that can be deployed on platforms with different capacities. However, they rely on pixel-wise sparse convolution, which is not hardware-friendly and achieves only limited practical speedup. Since an image can be divided into patches of varying restoration difficulty, we propose a scalable method based on Adaptive Patch Exiting (APE) to achieve more practical speedup. Specifically, we propose to train a regressor to predict the incremental capacity of each layer for a patch. Once the incremental capacity falls below a threshold, the patch can exit at that layer. Our method can easily adjust the trade-off between performance and efficiency by changing the threshold of the incremental capacity. In addition, we propose a novel strategy to enable the network training of our method. We conduct extensive experiments across various backbones, datasets, and scaling factors to demonstrate the advantages of our method. Code is available at https://github.com/littlepure2333/ape
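The sketch below shows how per-patch early exiting driven by a capacity regressor and a threshold could look; all module names and the threshold value are placeholders, not the released APE implementation.

```python
import torch
import torch.nn as nn

def restore_patch(patch, blocks, regressor, upsampler, threshold=0.01):
    """Illustrative adaptive patch exiting: a regressor predicts the incremental
    capacity each further block would add to this patch; once the prediction
    drops below `threshold`, the patch exits early. All module names here are
    placeholders, not the released implementation."""
    feat = patch
    for block in blocks:                 # the SR backbone's residual blocks
        feat = block(feat)
        gain = regressor(feat)           # predicted incremental capacity (scalar)
        if gain.item() < threshold:
            break                        # this patch is easy: stop refining it
    return upsampler(feat)               # produce the high-resolution patch
```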
Referring image segmentation aims to segment a target region referred to by a natural language expression. Due to the distinct data properties of text and image, it is challenging for a network to well align text and pixel-level features. Existing approaches use pretrained models to facilitate learning, but transfer the language/vision knowledge from pretrained models separately, ignoring the multi-modal correspondence information. Inspired by the recent advances in Contrastive Language-Image Pretraining (CLIP), in this paper we propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS). To transfer the multi-modal knowledge effectively, CRIS resorts to vision-language decoding and contrastive learning to achieve text-to-pixel alignment. More specifically, we design a vision-language decoder to propagate fine-grained semantic information from textual representations to each pixel-level activation, which promotes consistency between the two modalities. In addition, we present text-to-pixel contrastive learning, which explicitly enforces the text feature to be similar to the related pixel-level features and dissimilar to the irrelevant ones. Experimental results on three benchmark datasets demonstrate that our proposed framework significantly outperforms the state of the art without any post-processing. The code will be released.
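A minimal sketch of a text-to-pixel contrastive term is shown below, treating pixels inside the referred mask as positives for the sentence feature; the shapes, temperature, and the choice of a sigmoid-based loss are assumptions of this illustration, not necessarily the exact CRIS formulation.

```python
import torch
import torch.nn.functional as F

def text_to_pixel_contrastive_loss(text_feat, pixel_feats, gt_mask, tau=0.07):
    """Illustrative text-to-pixel contrastive term: pull the sentence feature
    towards pixel features inside the referred region and push it away from
    the rest. Shapes and the temperature `tau` are assumptions of this sketch.
    text_feat: (B, D); pixel_feats: (B, D, H, W); gt_mask: (B, 1, H, W) in {0, 1}."""
    b, d, h, w = pixel_feats.shape
    pix = F.normalize(pixel_feats.flatten(2), dim=1)        # (B, D, H*W)
    txt = F.normalize(text_feat, dim=1).unsqueeze(1)        # (B, 1, D)
    sim = torch.bmm(txt, pix).squeeze(1) / tau              # (B, H*W) text-pixel similarity
    target = gt_mask.flatten(1).float()                     # 1 inside the referred object
    # binary cross-entropy treats in-mask pixels as positives, others as negatives
    return F.binary_cross_entropy_with_logits(sim, target)
```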
Existing imitation learning methods suffer from low efficiency and poor generalization ability when facing the road option problem in urban environments. In this paper, we propose a yaw-guided imitation learning method to improve road option performance in an end-to-end autonomous driving paradigm, in terms of both the efficiency of exploiting training samples and the adaptability to changing environments. Specifically, the yaw information is provided by the trajectory of the navigation map. Our end-to-end architecture, Yaw-guided Imitation Learning with ResNet34 Attention (YILRATT), integrates a ResNet34 backbone and an attention mechanism to obtain accurate perception. It does not require a high-precision map and achieves fully end-to-end autonomous driving given the yaw information provided by a consumer-grade GPS receiver. By analyzing the attention heatmaps, we can reveal some causal relationships between decision making and scene perception, in particular failure cases caused by erroneous perception. We collect expert experience in the CARLA 0.9.11 simulator and improve the CoRL2017 and NoCrash benchmarks. Experimental results show that YILRATT achieves a 26.27% higher success rate than the SOTA CILRS. The code, dataset, benchmarks, and experimental results can be found at https://github.com/yandong024/yaw-guiding-il.git
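The sketch below shows one plausible way to condition an imitation-learning policy on the yaw signal, fusing ResNet34 image features with a yaw embedding before the control head; layer sizes are assumptions and the attention module of YILRATT is omitted.

```python
import torch
import torch.nn as nn
import torchvision

class YawConditionedPolicy(nn.Module):
    """Illustrative yaw-conditioned driving policy: image features from a
    ResNet34 backbone are fused with the yaw signal from the navigation
    trajectory before predicting controls. Layer sizes are assumptions; the
    attention mechanism of the actual architecture is omitted here."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.yaw_embed = nn.Sequential(nn.Linear(1, 64), nn.ReLU())
        self.control = nn.Sequential(nn.Linear(512 + 64, 256), nn.ReLU(),
                                     nn.Linear(256, 3))  # steer, throttle, brake

    def forward(self, image, yaw):
        feat = self.encoder(image).flatten(1)     # (B, 512) image feature
        cmd = self.yaw_embed(yaw.view(-1, 1))     # (B, 64) yaw embedding
        return self.control(torch.cat([feat, cmd], dim=1))
```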
Modern machine learning suffers from catastrophic forgetting when learning new classes incrementally. The performance dramatically degrades due to the missing data of old classes. Incremental learning methods have been proposed to retain the knowledge acquired from the old classes, by using knowledge distillation and keeping a few exemplars from the old classes. However, these methods struggle to scale up to a large number of classes. We believe this is because of the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes. Distinguishing between an increasing number of visually similar classes is particularly challenging when the training data is unbalanced. We propose a simple and effective method to address this data imbalance issue. We found that the last fully connected layer has a strong bias towards the new classes, and this bias can be corrected by a linear model. With two bias parameters, our method performs remarkably well on two large datasets: ImageNet (1000 classes) and MS-Celeb-1M (10000 classes), outperforming the state-of-the-art algorithms by 11.1% and 13.2% respectively.
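A minimal sketch of the two-parameter correction is shown below: the new-class logits are rescaled and shifted by a linear model while the old-class logits pass through unchanged; the module structure and names are illustrative.

```python
import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    """Sketch of a two-parameter bias correction: new-class logits are rescaled
    and shifted by (alpha, beta), typically learned on a small balanced
    validation set, while old-class logits are left unchanged."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits, num_old_classes):
        old = logits[:, :num_old_classes]                          # unchanged
        new = self.alpha * logits[:, num_old_classes:] + self.beta # corrected
        return torch.cat([old, new], dim=1)
```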
This paper addresses the deep face recognition (FR) problem under the open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of the angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and the MegaFace Challenge show the superiority of the A-Softmax loss in FR tasks. The code has also been made publicly available.
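For reference, with the class weight vectors normalized to unit norm and biases set to zero, the A-Softmax loss takes roughly the following form (a paraphrase of the standard presentation, not a quotation from the paper):

```latex
% A-Softmax (angular softmax) loss with angular margin m
L_{\mathrm{ang}} = \frac{1}{N} \sum_{i} -\log
  \frac{e^{\|x_i\|\,\psi(\theta_{y_i,i})}}
       {e^{\|x_i\|\,\psi(\theta_{y_i,i})} + \sum_{j \neq y_i} e^{\|x_i\|\cos\theta_{j,i}}},
\qquad
\psi(\theta) = (-1)^k \cos(m\theta) - 2k,\quad
\theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right],\; k \in \{0,\dots,m-1\}.
```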